Parallel Python

Part 2: Asynchronous Functions and Futures

The Pool.map function allows you to map a single function across an entire list of data. But what if you want to run lots of different functions? The solution is to apply individual functions to individual workers in the pool.

The Pool class comes with the function apply. This is used to tell one process in the worker pool to run a specified function. For example, create a new script called poolapply.py and type into it:

poolapply.py
import time
from multiprocessing import Pool, current_process

def slow_function(nsecs):
    """
    Function that sleeps for 'nsecs' seconds, returning
    the number of seconds that it slept
    """

    print(f"Process {current_process().pid} going to sleep for {nsecs} second(s)")

    # use the time.sleep function to sleep for nsecs seconds
    time.sleep(nsecs)

    print(f"Process {current_process().pid} waking up")

    return nsecs

if __name__ == "__main__":
    print(f"Master process is PID {current_process().pid}")

    with Pool() as pool:
        r = pool.apply(slow_function, [5])

    print(f"Result is {r}")

Run this script:

$
python poolapply.py
Master process is PID 8997
Process 8998 going to sleep for 5 second(s)
Process 8998 waking up
Result is 5

You should see something like this printed to the screen (with a delay of five seconds while the worker process sleeps).

The key line in this script is:

r = pool.apply(slow_function, [5])

The pool.apply function will request that one of the workers in the pool should run the passed function (in this case slow_function), with the arguments passed to the function held in the list (in this case [5]). The pool.apply function will wait until the passed function has finished, and will return the result of that function (here copied into r).

The arguments to the applied function must be placed into a list, even if the applied function has just a single argument (this is why we had to write [5] rather than just 5). The list must contain the same number of arguments as the applied function expects, in the same order as they are declared in the function. For example, edit your poolapply.py script to read:

poolapply.py
import time
from multiprocessing import Pool, current_process

def slow_add(nsecs, x, y):
    """
    Function that sleeps for 'nsecs' seconds, and
    then returns the sum of x and y
    """
    print(f"Process {current_process().pid} going to sleep for {nsecs} second(s)")

    time.sleep(nsecs)

    print(f"Process {current_process().pid} waking up")

    return x + y

if __name__ == "__main__":
    print(f"Master process is PID {current_process().pid}")

    with Pool() as pool:
        r = pool.apply(slow_add, [1, 6, 7])

    print(f"Result is {r}")

Here we have changed slow_function into slow_add, which accepts three arguments. These three arguments are passed via the list in pool.apply(slow_add, [1, 6, 7]).

Running this script should give output similar to:

$
python poolapply.py
Master process is PID 10125
Process 10126 going to sleep for 1 second(s)
Process 10126 waking up
Result is 13

Asynchronous Functions

A major problem of Pool.apply is that the master process is blocked until the worker has finished processing the applied function. This is obviously an issue if you want to run multiple applied functions in parallel!

Fortunately, Pool has an apply_async function. This is an asynchronous version of apply that runs the function in a worker process without blocking the master. Create a new Python script called applyasync.py and copy into it:

applyasync.py
import time
from multiprocessing import Pool, current_process

def slow_add(nsecs, x, y):
    """
    Function that sleeps for 'nsecs' seconds, and
    then returns the sum of x and y
    """
    print(f"Process {current_process().pid} going to sleep for {nsecs} second(s)")

    time.sleep(nsecs)

    print(f"Process {current_process().pid} waking up")

    return x + y

if __name__ == "__main__":
    print(f"Master process is PID {current_process().pid}")

    with Pool() as pool:
        r1 = pool.apply_async(slow_add, [1, 6, 7])
        r2 = pool.apply_async(slow_add, [1, 2, 3])

        r1.wait()
        print(f"Result one is {r1.get()}")

        r2.wait()
        print(f"Result two is {r2.get()}")

Running this script should give output similar to:

$
python applyasync.py
Master process is PID 11013
Process 11021 going to sleep for 1 second(s)
Process 11020 going to sleep for 1 second(s)
Process 11020 waking up
Process 11021 waking up
Result one is 13
Result two is 5

The key lines of this script are:

r1 = pool.apply_async(slow_add, [1, 6, 7])
r2 = pool.apply_async(slow_add, [1, 2, 3])

The apply_async function is identical to apply, except that it returns control to the master process immediately. This means that the master process is free to continue working (here, it immediately submits a second slow_add call via apply_async). In this case, this allows us to run the two slow_add calls in parallel. Most noticeably, even though each function call took one second to run, the whole program did not take two seconds. Because the calls ran in parallel, the whole program finished in just over one second.

Futures

An issue with running a function asynchronously is that the return value of the function is not available immediately. This means that, when running an asynchronous function, you don’t get the return value directly. Instead, apply_async returns a placeholder for the return value. This placeholder is called a “future”, and is a variable that in the future will be given the result of the function.

Futures are a very common variable type in parallel programming across many languages. Futures provide several common functions:

  • Block (wait) until the result is available. In multiprocessing, this is via the .wait() function, e.g. r1.wait() in the above script.
  • Retrieve the result when it is available (blocking until it is available). This is the .get() function, e.g. r1.get().
  • Test whether or not the result is available. This is the .ready() function, which returns True when the asynchronous function has finished and the result is available via .get().
  • Test whether or not the function was a success, e.g. whether or not an exception was raised when running the function. This is the .successful() function, which returns True if the asynchronous function completed without raising an exception. Note that this function should only be called after the result is available (e.g. when .ready() returns True).

In the above example, r1 and r2 were both futures for the results of the two asynchronous calls of slow_add. The two slow_add calls were processed by two worker processes. The master process was then blocked using r1.wait() to wait for the result of the first call, and then blocked using r2.wait() to wait for the result of the second call.

(note that we had to wait for all of the results to be delivered to our futures before we exited the with block; otherwise the pool of workers could be destroyed before the functions had completed and the results were available)

We can explore this more using the following example. Create a script called future.py and copy into it:

future.py
import time
from multiprocessing import Pool

def slow_add(nsecs, x, y):
    """
    Function that sleeps for 'nsecs' seconds, and
    then returns the sum of x and y
    """
    time.sleep(nsecs)
    return x + y

def slow_diff(nsecs, x, y):
    """
    Function that sleeps for 'nsecs' seconds, and
    then returns the difference of x and y
    """
    time.sleep(nsecs)
    return x - y

def broken_function(nsecs):
    """Function that deliberately raises an AssertationError"""
    time.sleep(nsecs)
    raise ValueError("Called broken function")

if __name__ == "__main__":
    futures = []

    with Pool() as pool:
        futures.append(pool.apply_async(slow_add, [3, 6, 7]))
        futures.append(pool.apply_async(slow_diff, [2, 5, 2]))
        futures.append(pool.apply_async(slow_add, [1, 8, 1]))
        futures.append(pool.apply_async(slow_diff, [5, 9, 2]))
        futures.append(pool.apply_async(broken_function, [4]))

        while True:
            all_finished = True

            print("\nHave the workers finished?")

            for i, future in enumerate(futures):
                if future.ready():
                    print(f"Process {i} has finished")
                else:
                    all_finished = False
                    print(f"Process {i} is running...")

            if all_finished:
                break

            time.sleep(1)

        print("\nHere are the results.")

        for i, future in enumerate(futures):
            if future.successful():
                print(f"Process {i} was successful. Result is {future.get()}")
            else:
                print(f"Process {i} failed!")

                try:
                    future.get()
                except Exception as e:
                    print(f"    Error = {type(e)} : {e}")

Running this script should give output similar to:

$
python future.py
Have the workers finished?
Process 0 is running...
Process 1 is running...
Process 2 is running...
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 is running...
Process 1 is running...
Process 2 is running...
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 is running...
Process 1 is running...
Process 2 has finished
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 has finished
Process 1 has finished
Process 2 has finished
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 has finished
Process 1 has finished
Process 2 has finished
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 has finished
Process 1 has finished
Process 2 has finished
Process 3 is running...
Process 4 is running...

Have the workers finished?
Process 0 has finished
Process 1 has finished
Process 2 has finished
Process 3 has finished
Process 4 has finished

Here are the results.
Process 0 was successful. Result is 13
Process 1 was successful. Result is 3
Process 2 was successful. Result is 9
Process 3 was successful. Result is 7
Process 4 failed!
    Error = <class 'ValueError'> : Called broken function

Is this the output you expected? Note that the exception raised by broken_function is held safely in its associated future. This is indicated by .successful() returning False, which allows us to handle the exception in a try...except block placed around the .get() call (if you .get() a future that holds an exception, then that exception is raised).

Exercise

Edit the future.py script so that you can control the number of workers in the pool using a command line argument (e.g. using Pool(processes=int(sys.argv[1])) rather than Pool()).

Edit the script to add calls to more asynchronous functions.

Then experiment with running the script with different numbers of processes in the pool and with different numbers of asynchronous function calls.

How are the asynchronous function calls distributed across the pool of worker processes?